Basel problem

The Basel problem is a problem in mathematical analysis with relevance to number theory, concerning an infinite sum of inverse squares. It was first posed by Pietro Mengoli in 1650 and solved by Leonhard Euler in 1734, and read on 5 December 1735 in The Saint Petersburg Academy of Sciences (E41 – De summis serierum reciprocarum). Since the problem had withstood the attacks of the leading mathematicians of the day, Euler's solution brought him immediate fame when he was twenty-eight. Euler generalised the problem considerably, and his ideas were taken up more than a century later by Bernhard Riemann in his seminal 1859 paper "On the Number of Primes Less Than a Given Magnitude", in which he defined his zeta function and proved its basic properties. The problem is named after the city of Basel, hometown of Euler as well as of the Bernoulli family, whose members unsuccessfully attacked the problem.

The Basel problem asks for the precise summation of the reciprocals of the squares of the natural numbers, i.e. the precise sum of the infinite series: \sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \cdots.

The sum of the series is approximately equal to 1.644934. The Basel problem asks for the exact sum of this series (in closed form), as well as a proof that this sum is correct. Euler found the exact sum to be {\pi^2}/{6} and announced this discovery in 1735. His arguments were based on manipulations that were not justified at the time, although he was later proven correct. He produced an accepted proof in 1741.

The solution to this problem can be used to estimate the probability that two large random numbers are relatively prime. Two random integers in the range from 1 to n, in the limit as n goes to infinity, are relatively prime with a probability that approaches {6}/{\pi^2}, the reciprocal of the solution to the Basel problem.
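This density is easy to check empirically. The following is a minimal illustrative sketch (not part of the original article) that counts coprime pairs up to a cutoff and compares the result with 6/\pi^2:

```python
from math import gcd, pi

# Count pairs (a, b) with 1 <= a, b <= N that are relatively prime,
# and compare the empirical density with 6/pi^2 ~ 0.6079.
N = 200
coprime_pairs = sum(1 for a in range(1, N + 1)
                      for b in range(1, N + 1) if gcd(a, b) == 1)
density = coprime_pairs / N**2
print(density, 6 / pi**2)
```

Even at N = 200 the empirical density agrees with 6/\pi^2 to roughly two decimal places; the discrepancy shrinks on the order of (log N)/N.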


Euler's approach
Euler's original derivation of the value {\pi^2}/{6} essentially extended observations about finite and assumed that these same properties hold true for infinite series.

Of course, Euler's original reasoning requires justification (100 years later, Karl Weierstrass proved that Euler's representation of the sine function as an infinite product is valid, by the Weierstrass factorization theorem), but even without justification, by simply obtaining the correct value, he was able to verify it numerically against partial sums of the series. The agreement he observed gave him sufficient confidence to announce his result to the mathematical community.
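Euler's numerical check is easy to reproduce today. A small sketch (a direct partial sum, not Euler's faster acceleration techniques) comparing against \pi^2/6:

```python
from math import pi

# The tail of the series after n terms is about 1/n, so 10^5 terms
# give roughly five correct decimal digits of pi^2/6 = 1.644934...
partial = sum(1.0 / n**2 for n in range(1, 100001))
print(partial, pi**2 / 6)
```

The direct sum converges slowly (one digit per factor-of-ten more terms), which is why Euler relied on cleverer numerical schemes to obtain many digits.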

To follow Euler's argument, recall the expansion of the sine function \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots Dividing through by x gives \frac{\sin x}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} + \cdots .

The Weierstrass factorization theorem shows that \frac{\sin x}{x} is the product of linear factors given by its roots, just as for finite polynomials. Euler assumed this as a heuristic for expanding an infinite-degree polynomial in terms of its roots, but in fact it is not always true for a general P(x). A priori, since the left-hand side is a "polynomial" of infinite degree, we can write it as a product over its roots as \sin(x) = Ax \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right) \cdots. Then since we know from elementary calculus that \lim_{x \rightarrow 0} \frac{\sin(x)}{x} = 1, we conclude that the leading constant must satisfy A = 1. This factorization expands the equation into: \begin{align} \frac{\sin x}{x} &= \left(1 - \frac{x}{\pi}\right)\left(1 + \frac{x}{\pi}\right)\left(1 - \frac{x}{2\pi}\right)\left(1 + \frac{x}{2\pi}\right)\left(1 - \frac{x}{3\pi}\right)\left(1 + \frac{x}{3\pi}\right) \cdots \\
                   &= \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right) \cdots
\end{align}

If we formally multiply out this product and collect all the x^2 terms (we are allowed to do so because of Newton's identities), we see by induction that the x^2 coefficient of \frac{\sin x}{x} is -\left(\frac{1}{\pi^2} + \frac{1}{4\pi^2} + \frac{1}{9\pi^2} + \cdots \right) = -\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}. In particular, letting H_n^{(2)} := \sum_{k=1}^n k^{-2} denote a generalized second-order harmonic number, we can easily prove by induction that \left[x^2\right] \prod_{k=1}^{n} \left(1-\frac{x^2}{k^2\pi^2}\right) = -\frac{H_n^{(2)}}{\pi^2} \rightarrow -\frac{\zeta(2)}{\pi^2} as n \rightarrow \infty.

But from the original infinite series expansion of \frac{\sin x}{x}, the coefficient of x^2 is -\frac{1}{3!} = -\frac{1}{6}. These two coefficients must be equal; thus, -\frac{1}{6} = -\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}.

Multiplying both sides of this equation by -\pi^2 gives the sum of the reciprocals of the squares of the positive integers. \sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}.


Generalizations of Euler's method using elementary symmetric polynomials
Using formulae obtained from elementary symmetric polynomials, this same approach can be used to enumerate formulae for the even-indexed zeta constants, which have the following known expansion in terms of the Bernoulli numbers: \zeta(2n) = \frac{(-1)^{n-1} (2\pi)^{2n}}{2 \cdot (2n)!} B_{2n}.

For example, let the partial product for \sin(x) expanded as above be defined by \frac{S_n(x)}{x} = \prod\limits_{k=1}^n \left(1 - \frac{x^2}{k^2 \cdot \pi^2}\right). Then using known formulas for elementary symmetric polynomials (a.k.a. Newton's identities), we can see (for example) that \begin{align} \left[x^4\right] \frac{S_n(x)}{x} & = \frac{1}{2\pi^4}\left(\left(H_n^{(2)}\right)^2 - H_n^{(4)}\right) \qquad \xrightarrow{n \rightarrow \infty} \qquad \frac{1}{2\pi^4}\left(\zeta(2)^2-\zeta(4)\right) \\[4pt] & \qquad \implies \zeta(4) = \frac{\pi^4}{90} = -2\pi^4 \cdot \left[x^4\right] \frac{\sin(x)}{x} +\frac{\pi^4}{36} \\[8pt] \left[x^6\right] \frac{S_n(x)}{x} & = -\frac{1}{6\pi^6}\left(\left(H_n^{(2)}\right)^3 - 3H_n^{(2)} H_n^{(4)} + 2H_n^{(6)}\right) \qquad \xrightarrow{n \rightarrow \infty} \qquad -\frac{1}{6\pi^6}\left(\zeta(2)^3-3\zeta(2)\zeta(4) + 2\zeta(6)\right) \\[4pt] & \qquad \implies \zeta(6) = \frac{\pi^6}{945} = -3\pi^6 \cdot \left[x^6\right] \frac{\sin(x)}{x} + \frac{3}{2}\cdot\frac{\pi^2}{6}\cdot\frac{\pi^4}{90} - \frac{1}{2}\cdot\frac{\pi^6}{216}, \end{align}

and so on for subsequent coefficients of x^{2k} in \frac{S_n(x)}{x}. There are other forms of Newton's identities expressing the (finite) power sums H_n^{(2k)} in terms of the elementary symmetric polynomials, e_i \equiv e_i\left(-\frac{\pi^2}{1^2}, -\frac{\pi^2}{2^2}, -\frac{\pi^2}{3^2}, -\frac{\pi^2}{4^2}, \ldots\right), but we can go a more direct route to expressing non-recursive formulas for \zeta(2k) using the method of elementary symmetric polynomials. Namely, we have a recurrence relation between the elementary symmetric polynomials and the power sum polynomials given by (-1)^{k}k e_k(x_1,\ldots,x_n) = \sum_{j=1}^k (-1)^{k-j-1} p_j(x_1,\ldots,x_n)e_{k-j}(x_1,\ldots,x_n),
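These coefficient identities are easy to sanity-check numerically. A sketch (illustrative, not part of the original derivation) that plugs the known series coefficients [x^4]\,\sin(x)/x = 1/5! and [x^6]\,\sin(x)/x = -1/7! into the Newton's-identity combinations and compares with the closed forms \zeta(4)=\pi^4/90 and \zeta(6)=\pi^6/945:

```python
from math import pi

# From  zeta(2)^2 - zeta(4) = 2*pi^4 * [x^4] sin(x)/x  and
#       zeta(2)^3 - 3*zeta(2)*zeta(4) + 2*zeta(6) = -6*pi^6 * [x^6] sin(x)/x,
# solve for zeta(4) and zeta(6) using [x^4] = 1/120, [x^6] = -1/5040.
zeta4 = pi**4 / 36 - 2 * pi**4 * (1 / 120)
zeta6 = (-3 * pi**6 * (-1 / 5040)
         + 1.5 * (pi**2 / 6) * (pi**4 / 90)   # (3/2) zeta(2) zeta(4)
         - 0.5 * (pi**6 / 216))               # (1/2) zeta(2)^3
print(zeta4, pi**4 / 90)
print(zeta6, pi**6 / 945)
```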

which in our situation equates to the limiting recurrence relation (or generating function convolution) expanded as \frac{\pi^{2k}}{2}\cdot \frac{(2k) \cdot (-1)^k}{(2k+1)!} = -\left[x^{2k}\right] \frac{\sin(\pi x)}{\pi x} \times \sum_{i \geq 1} \zeta(2i) x^{2i}.

Then by differentiation and rearrangement of the terms in the previous equation, we obtain that \zeta(2k) = \left[x^{2k}\right]\frac{1}{2}\left(1-\pi x\cot(\pi x)\right).


Consequences of Euler's proof
By the above results, we can conclude that \zeta(2k) is always a rational multiple of \pi^{2k}. In particular, since \pi and integer powers of it are transcendental, we can conclude at this point that \zeta(2k) is irrational, and more precisely, transcendental for all k \geq 1. By contrast, the properties of the odd-indexed zeta constants, including Apéry's constant \zeta(3), are almost completely unknown.


The Riemann zeta function
The Riemann zeta function is one of the most significant functions in mathematics because of its relationship to the distribution of the prime numbers. The zeta function is defined for any complex number s with real part greater than 1 by the following formula: \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}.

Taking s = 2, we see that \zeta(2) is equal to the sum of the reciprocals of the squares of all positive integers: \zeta(2) = \sum_{n=1}^\infty \frac{1}{n^2}

               = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \cdots = \frac{\pi^2}{6} \approx 1.644934.
     

Convergence can be proven by the integral test, or by the following inequality: \begin{align}

 \sum_{n=1}^N \frac{1}{n^2} & < 1 + \sum_{n=2}^N \frac{1}{n(n-1)} \\
                            & = 1 + \sum_{n=2}^N \left( \frac{1}{n-1} - \frac{1}{n} \right) \\
                            & = 1 + 1 - \frac{1}{N} \;{\stackrel{N \to \infty}{\longrightarrow}}\; 2.
     
\end{align}

This gives us the upper bound 2, and because the infinite sum contains no negative terms, it must converge to a value strictly between 0 and 2. It can be shown that \zeta(s) has a simple expression in terms of the Bernoulli numbers whenever s is a positive even integer. With s = 2n: \zeta(2n) = \frac{(2\pi)^{2n}(-1)^{n+1}B_{2n}}{2\cdot(2n)!}.
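The telescoping bound above can be demonstrated directly; a small illustrative sketch checking that every partial sum stays below the telescoped bound 2 - 1/N:

```python
# Telescoping comparison: 1/n^2 < 1/(n(n-1)) = 1/(n-1) - 1/n for n >= 2,
# so the partial sum through N is bounded by 1 + (1 - 1/N) = 2 - 1/N.
N = 1000
partial = sum(1 / n**2 for n in range(1, N + 1))
bound = 1 + sum(1 / (n * (n - 1)) for n in range(2, N + 1))
print(partial, bound, 2 - 1 / N)
```

The comparison sum `bound` telescopes exactly to 2 - 1/N, while the partial sum of the series stays strictly below it.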


A proof using Euler's formula and L'Hôpital's rule
The normalized sinc function \text{sinc}(x)=\frac{\sin (\pi x)}{\pi x} has a Weierstrass factorization representation as an infinite product: \frac{\sin (\pi x)}{\pi x} = \prod_{n=1}^\infty \left(1-\frac{x^2}{n^2}\right).

The infinite product is analytic, so taking the natural logarithm of both sides and differentiating yields \frac{\pi \cos (\pi x)}{\sin (\pi x)}-\frac{1}{x}=-\sum_{n=1}^\infty \frac{2x}{n^2-x^2}

(by uniform convergence, the interchange of the derivative and infinite series is permissible). After dividing the equation by 2x and regrouping one gets \frac{1}{2x^2}-\frac{\pi \cot (\pi x)}{2x}=\sum_{n=1}^\infty \frac{1}{n^2-x^2}.

We make a change of variables (x=-it): -\frac{1}{2t^2}+\frac{\pi \cot (-\pi it)}{2it}=\sum_{n=1}^\infty \frac{1}{n^2+t^2}.

Euler's formula can be used to deduce that \frac{\pi \cot (-\pi i t)}{2it}=\frac{\pi}{2it}\frac{i\left(e^{2\pi t}+1\right)}{e^{2\pi t}-1}=\frac{\pi}{2t}+\frac{\pi}{t\left(e^{2\pi t} - 1\right)}, or, using the corresponding hyperbolic function: \frac{\pi \cot (-\pi i t)}{2it}=\frac{\pi}{2t}\coth(\pi t).

Then \sum_{n=1}^\infty \frac{1}{n^2+t^2}=\frac{\pi \left(te^{2\pi t}+t\right)-e^{2\pi t}+1}{2\left(t^2 e^{2\pi t}-t^2\right)}=-\frac{1}{2t^2} + \frac{\pi}{2t} \coth(\pi t).

Now we take the limit as t approaches zero and use L'Hôpital's rule thrice. By Tannery's theorem applied to \lim_{t\to\infty}\sum_{n=1}^\infty 1/(n^2+1/t^2), we can interchange the limit and infinite series so that \lim_{t\to 0}\sum_{n=1}^\infty 1/(n^2+t^2)=\sum_{n=1}^\infty 1/n^2 and by L'Hôpital's rule \begin{align}\sum_{n=1}^\infty \frac{1}{n^2}&=\lim_{t\to 0}\frac{\pi}{4}\frac{2\pi te^{2\pi t}-e^{2\pi t}+1}{\pi t^2 e^{2\pi t} + te^{2\pi t}-t}\\[6pt] &=\lim_{t\to 0}\frac{\pi^3 te^{2\pi t}}{2\pi \left(\pi t^2 e^{2\pi t}+2te^{2\pi t} \right)+e^{2\pi t}-1}\\[6pt] &=\lim_{t\to 0}\frac{\pi^2 (2\pi t+1)}{4\pi^2 t^2+12\pi t+6}\\[6pt] &=\frac{\pi^2}{6}.\end{align}
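The key partial-fraction identity used above is easy to spot-check before taking the limit. A small illustrative sketch at the sample point t = 1/2 (truncating the series at a large index):

```python
from math import pi, tanh

# Spot-check  sum_{n>=1} 1/(n^2 + t^2) = -1/(2 t^2) + (pi/(2 t)) coth(pi t)
# at t = 0.5; coth(x) = 1/tanh(x).
t = 0.5
lhs = sum(1 / (n**2 + t**2) for n in range(1, 200001))
rhs = -1 / (2 * t**2) + pi / (2 * t * tanh(pi * t))
print(lhs, rhs)
```

The truncation error of the left-hand side is roughly the tail of \sum 1/n^2, about 5\cdot10^{-6} here, so the two printed values agree to several decimal places.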


A proof using Fourier series
Use Parseval's identity (applied to the function f(x) = x) to obtain \sum_{n=-\infty}^\infty |c_n|^2 = \frac{1}{2\pi}\int_{-\pi}^\pi x^2 \, dx, where \begin{align}
 c_n &= \frac{1}{2\pi}\int_{-\pi}^\pi x e^{-inx} \, dx \\[4pt]
     &= \frac{n\pi \cos(n\pi)-\sin(n\pi)}{\pi n^2} i \\[4pt]
     &= \frac{\cos(n\pi)}{n} i \\[4pt]
     &= \frac{(-1)^n}{n} i
     
\end{align}

for n \neq 0, and c_0 = 0. Thus, |c_n|^2 = \begin{cases} \dfrac{1}{n^2}, & \text{for } n \neq 0, \\ 0, & \text{for } n = 0, \end{cases}

and \sum_{n=-\infty}^\infty |c_n|^2 = 2\sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{2\pi} \int_{-\pi}^\pi x^2 \, dx.

Therefore, \sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{4\pi}\int_{-\pi}^\pi x^2 \, dx = \frac{\pi^2}{6} as required.
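The bookkeeping in this proof can be verified numerically; a brief illustrative sketch checking that the energy of f(x) = x matches the sum of its Fourier coefficient magnitudes:

```python
from math import pi

# Parseval for f(x) = x on (-pi, pi):
#   (1/2pi) * integral of x^2  =  pi^2/3  =  sum_{n != 0} |c_n|^2
#                                          =  2 * sum_{n>=1} 1/n^2.
energy = pi**2 / 3                  # (1/2pi) * (2 pi^3 / 3)
coefficient_sum = 2 * sum(1 / n**2 for n in range(1, 100001))
print(energy, coefficient_sum)
```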


Another proof using Parseval's identity
Given a complete orthonormal basis in the space L^2_{\operatorname{per}}(0, 1) of L2 periodic functions over (0, 1) (i.e., the subspace of square-integrable functions which are also periodic), denoted by \{e_i\}_{i=-\infty}^{\infty}, Parseval's identity tells us that \|x\|^2 = \sum_{i=-\infty}^{\infty} |\langle e_i, x\rangle|^2,

where \|x\| := \sqrt{\langle x,x\rangle} is defined in terms of the inner product on this Hilbert space, given by \langle f, g\rangle = \int_0^1 f(x) \overline{g(x)} \, dx,\ f,g \in L^2_{\operatorname{per}}(0, 1).

We can consider the orthonormal basis on this space defined by e_k \equiv e_k(\vartheta) := \exp(2\pi\imath k \vartheta) such that \langle e_k,e_j\rangle = \int_0^1 e^{2\pi\imath (k-j) \vartheta} \, d\vartheta = \delta_{k,j}. Then if we take f(\vartheta) := \vartheta, we can compute both that \begin{align} \|f\|^2 & = \int_0^1 \vartheta^2 \, d\vartheta = \frac{1}{3} \\ \langle f, e_k\rangle & = \int_0^1 \vartheta e^{-2\pi\imath k\vartheta} \, d\vartheta = \begin{cases} \frac{1}{2}, & k = 0 \\ -\frac{1}{2\pi\imath k}, & k \neq 0, \end{cases} \end{align}

by elementary calculus and integration by parts, respectively. Finally, by Parseval's identity stated in the form above, we obtain that \begin{align} \|f\|^2 = \frac{1}{3} & = \sum_{\stackrel{k=-\infty}{k \neq 0}}^{\infty} \frac{1}{(2\pi k)^2}+ \frac{1}{4}

    = 2 \sum_{k=1}^{\infty} \frac{1}{(2\pi k)^2}+ \frac{1}{4} \\
    & \implies \frac{\pi^2}{6} = \frac{2 \pi^2}{3} - \frac{\pi^2}{2} = \zeta(2).
     
\end{align}


Generalizations and recurrence relations
Note that by considering higher-order powers of f_j(\vartheta) := \vartheta^j \in L^2_{\operatorname{per}}(0, 1) we can use integration by parts to extend this method to enumerating formulas for \zeta(2j) when j > 1. In particular, suppose we let I_{j,k} := \int_0^1 \vartheta^j e^{-2\pi\imath k\vartheta} \, d\vartheta,

so that integration by parts yields the recurrence relation that \begin{align} I_{j,k} & = \begin{cases} \frac{1}{j+1}, & k=0; \\[4pt] -\frac{1}{2\pi\imath \cdot k} + \frac{j}{2\pi\imath \cdot k} I_{j-1,k}, & k \neq 0\end{cases} \\[6pt]

    & = \begin{cases} \frac{1}{j+1}, & k=0; \\[4pt] -\sum\limits_{m=1}^j \frac{j!}{(j+1-m)!} \cdot \frac{1}{(2\pi\imath \cdot k)^m}, & k \neq 0. \end{cases}
     
\end{align}

Then by applying Parseval's identity as we did for the first case above, along with the linearity of the inner product, we obtain that \begin{align} \|f_j\|^2 = \frac{1}{2j+1} & = 2 \sum_{k \geq 1} I_{j,k} \bar{I}_{j,k} + \frac{1}{(j+1)^2} \\[6pt]

    & = 2 \sum_{m=1}^j \sum_{r=1}^j \frac{j!^2}{(j+1-m)! (j+1-r)!} \frac{(-1)^r}{\imath^{m+r}} \frac{\zeta(m+r)}{(2\pi)^{m+r}} + \frac{1}{(j+1)^2}.
     
\end{align}


Proof using differentiation under the integral sign
It's possible to prove the result using elementary calculus by applying the differentiation under the integral sign technique to an integral due to Freitas: I(\alpha) = \int_0^\infty \ln\left(1+\alpha e^{-x}+e^{-2x}\right)dx.

While the primitive function of the integrand cannot be expressed in terms of elementary functions, by differentiating with respect to \alpha we arrive at

\frac{dI}{d\alpha} = \int_0^\infty \frac{e^{-x}}{1+\alpha e^{-x}+e^{-2x}}dx, which can be integrated by substituting u=e^{-x} and decomposing into partial fractions. In the range -2\leq\alpha\leq 2 the definite integral reduces to

\frac{dI}{d\alpha} = \frac{2}{\sqrt{4-\alpha^2}}\left[\arctan\left(\frac{\alpha+2}{\sqrt{4-\alpha^2}}\right)-\arctan\left(\frac{\alpha}{\sqrt{4-\alpha^2}}\right)\right].

The expression can be simplified using the arctangent addition formula and integrated with respect to \alpha by means of trigonometric substitution, resulting in

I(\alpha) = -\frac{1}{2}\left(\arccos\frac{\alpha}{2}\right)^2 + c.

The integration constant c can be determined by noticing that two distinct values of I(\alpha) are related by

I(2) = 4I(0), because when calculating I(2) we can factor 1+2e^{-x}+e^{-2x} = (1+e^{-x})^2 and express it in terms of I(0) using the logarithm of a power identity and the substitution u=x/2. This makes it possible to determine c = \frac{\pi^2}{6}, and it follows that

I(-2) = 2\int_0^\infty \ln(1-e^{-x})dx = -\frac{\pi^2}{3}.

This final integral can be evaluated by expanding the natural logarithm into its Taylor series:

\int_0^\infty \ln(1-e^{-x})dx = - \sum_{n=1}^\infty \int_0^\infty \frac{e^{-nx}}{n}dx = -\sum_{n=1}^\infty\frac{1}{n^2}.
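The value of this integral can be confirmed by simple numerical quadrature; an illustrative sketch using the midpoint rule (which avoids the logarithmic singularity at x = 0):

```python
from math import exp, log, pi

# Midpoint-rule check of  integral_0^inf ln(1 - e^(-x)) dx = -pi^2/6.
# The integrand behaves like ln(x) near 0 (integrable) and decays like
# -e^(-x) at infinity, so truncating at x = 40 loses only ~e^(-40).
h = 1e-4
integral = h * sum(log(1 - exp(-(k + 0.5) * h)) for k in range(400000))
print(integral, -pi**2 / 6)
```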

The last two identities imply

\sum_{n=1}^\infty\frac{1}{n^2} = \frac{\pi^2}{6}.


Cauchy's proof
While most proofs use results from advanced mathematics, such as Fourier analysis, complex analysis, and multivariable calculus, the following does not even require single-variable calculus (until a single limit is taken at the end).

A short proof using the residue theorem is also known.


History of this proof
The proof goes back to Augustin Louis Cauchy (Cours d'Analyse, 1821, Note VIII). In 1954, this proof appeared in the book of Akiva and Isaak Yaglom, "Nonelementary Problems in an Elementary Exposition". Later, in 1982, it appeared in the journal Eureka, attributed to John Scholes, but Scholes claims he learned the proof from Peter Swinnerton-Dyer, and in any case he maintains the proof was "common knowledge at Cambridge in the late 1960s". (This anecdote is missing from later editions of the book, which replace it with earlier history of the same proof.)


The proof
[[File:limit circle FbN.jpeg|thumb|The inequality
\tfrac{1}{2}r^2\tan\theta > \tfrac{1}{2}r^2\theta > \tfrac{1}{2}r^2\sin\theta
is shown pictorially for any \theta \in (0, \pi/2). The three terms are the areas of the triangle OAC, the circular sector OAB, and the triangle OAB.

Taking reciprocals and squaring gives
\cot^2\theta<\tfrac{1}{\theta^2}<\csc^2\theta.]] The main idea behind the proof is to bound the partial (finite) sums \sum_{k=1}^m \frac{1}{k^2} = \frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{m^2} between two expressions, each of which will tend to \frac{\pi^2}{6} as m approaches infinity. The two expressions are derived from identities involving the cotangent and cosecant functions. These identities are in turn derived from de Moivre's formula, and we now turn to establishing these identities.

Let x be a real number with 0 < x < \pi/2, and let n be a positive odd integer. Then from de Moivre's formula and the definition of the cotangent function, we have \begin{align}

 \frac{\cos (nx) + i \sin (nx)}{\sin^n x} &= \frac{(\cos x + i\sin x)^n}{\sin^n x} \\[4pt]
                                            &= \left(\frac{\cos x + i \sin x}{\sin x}\right)^n \\[4pt]
                                            &= (\cot x + i)^n.
     
\end{align}

From the binomial theorem, we have \begin{align} (\cot x + i)^n = & {n \choose 0} \cot^n x + {n \choose 1} (\cot^{n - 1} x)i + \cdots + {n \choose {n - 1}} (\cot x)i^{n - 1} + {n \choose n} i^n \\[6pt] = & \Bigg( {n \choose 0} \cot^n x - {n \choose 2} \cot^{n - 2} x \pm \cdots \Bigg) \; + \; i\Bigg( {n \choose 1} \cot^{n-1} x - {n \choose 3} \cot^{n - 3} x \pm \cdots \Bigg). \end{align}

Combining the two equations and equating imaginary parts gives the identity \frac{\sin (nx)}{\sin^n x} = \Bigg( {n \choose 1} \cot^{n - 1} x - {n \choose 3} \cot^{n - 3} x \pm \cdots \Bigg).

We take this identity, fix a positive integer m, set n = 2m + 1, and consider x_r = \frac{r\pi}{2m+1} for r = 1, 2, \ldots, m. Then nx_r is a multiple of \pi and therefore \sin(nx_r) = 0. So, 0 = {{2m+1} \choose 1} \cot^{2m} x_r - {{2m+1} \choose 3} \cot^{2m-2} x_r \pm \cdots + (-1)^m {{2m+1} \choose {2m+1}} for every r = 1, \ldots, m. Since the numbers x_1, \ldots, x_m are distinct in the interval (0, \pi/2), the values t_r = \cot^2 x_r are m distinct roots of the degree-m polynomial p(t) := {{2m+1} \choose 1} t^m - {{2m+1} \choose 3} t^{m-1} \pm \cdots + (-1)^m {{2m+1} \choose {2m+1}}. By Vieta's formulas, the sum of the roots is \cot^2 x_1 + \cdots + \cot^2 x_m = \binom{2m+1}{3}\Big/\binom{2m+1}{1} = \frac{2m(2m-1)}{6}. Substituting the inequality \cot^2\theta < \tfrac{1}{\theta^2} < \csc^2\theta = \cot^2\theta + 1 at each x_r and summing over r gives \frac{2m(2m-1)}{6} < \sum_{r=1}^m \frac{(2m+1)^2}{r^2\pi^2} < \frac{2m(2m-1)}{6} + m. Multiplying through by \frac{\pi^2}{(2m+1)^2} and letting m approach infinity, both outer expressions tend to \frac{\pi^2}{6}, so by the squeeze theorem \sum_{k=1}^\infty \frac{1}{k^2} = \frac{\pi^2}{6}.
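The cotangent sum at the heart of this proof can be checked numerically. An illustrative sketch (the closed-form value 2m(2m-1)/6 for the sum comes from applying Vieta's formulas in the standard completion of Cauchy's argument):

```python
from math import pi, tan

# For x_r = r*pi/(2m+1), the sum of cot^2 x_r over r = 1..m equals
# 2m(2m-1)/6, and pi^2/(2m+1)^2 times that sum approaches pi^2/6.
m = 500
n = 2 * m + 1
cot_sq_sum = sum(1 / tan(r * pi / n) ** 2 for r in range(1, m + 1))
print(cot_sq_sum, 2 * m * (2 * m - 1) / 6)   # agree up to rounding
print(cot_sq_sum * pi**2 / n**2, pi**2 / 6)  # lower bound of the squeeze
```

The first pair of numbers agrees up to floating-point rounding, and the rescaled lower bound is already within about 0.005 of \pi^2/6 at m = 500.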


See also
  • List of sums of reciprocals


